Remote Sensing and Sensors|24 Article(s)
Block Image Enhancement Method Based on Global Adaptive Processing
Bin ZUO, Qiang XU, Ran PANG, Jinlong XIE, Yuwei ZHAI, and Fang GAO
To address the problem that existing panchromatic remote sensing image enhancement algorithms excessively enhance background noise while enhancing objects in the image, a block image enhancement method based on global adaptive processing is proposed. Firstly, the image is divided into blocks and the enhancement parameters of each block are calculated separately. Remote sensing images contain many types of ground objects, and the differences between them are large; if uniform enhancement parameters are applied to the whole image, the enhancement effect on some ground objects is often unsatisfactory. Local enhancement solves this problem well: it operates mainly on a small area near the target object and can make full use of the image's dynamic range to represent grayscale variation. Next, global adaptive processing is carried out: global adaptive enhancement parameters are calculated and used to correct the enhancement parameters of noise blocks. For this step, a global adaptive enhancement technique based on the grayscale histogram is proposed, which captures the grayscale distribution of background objects in the image. This helps determine the brightness of the target relative to the background, after which enhancement parameters with a better enhancement effect can be selected. Then, adjacent image blocks are merged by constructing a block difference factor, and the blocks are classified into detail blocks and noise blocks. Local block enhancement tends to amplify noise in some blocks; to identify these blocks accurately and correct them with the histogram-based global adaptive parameters, the block difference factor between adjacent blocks is calculated.
From this factor, it can be judged whether the texture and brightness information between adjacent image blocks changes abruptly at the block boundary. Finally, per-pixel parameters are obtained by interpolating the block-based enhancement parameters, and the panchromatic remote sensing image is enhanced accordingly. The proposed method is applied to panchromatic remote sensing images of different scenes, and the effects of various enhancement methods are evaluated with multiple indexes; the proposed method performs well. Comparing the enhancement results of different algorithms on remote sensing images near ports shows that the proposed algorithm can reduce the dynamic range of the image and effectively enhance object details, improving the clarity of ship targets in port scenes. Comparing the enhancement results on cloud-covered images shows that the proposed algorithm effectively suppresses cloud interference with ship targets. Even when some ships are partially covered by clouds, the algorithm increases the grayscale contrast of the pixels around the target as much as possible, correcting the over-enhancement or under-enhancement that previous methods exhibit on cloud-covered images. This indicates that the proposed algorithm effectively solves the problem that existing panchromatic remote sensing image enhancement algorithms over-enhance background noise while enhancing the details of target objects. The average running time of the different algorithms is also measured; the proposed algorithm averages 0.14 s, slightly less than the processing time of the contrast enhancement method.
The method can eliminate the residual error of the existing radiometric correction processing to a certain extent and make the same object in different images comparable.
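The block-wise pipeline summarized in this abstract can be sketched in a few lines (an illustrative NumPy sketch: the per-block linear stretch and the exact form of the block difference factor are assumptions for demonstration, not the authors' formulas):

```python
import numpy as np

def block_params(img, bs=32):
    """Per-block linear-stretch parameters (gain, offset) -- a stand-in
    for the paper's block enhancement parameters."""
    params = {}
    for i in range(0, img.shape[0], bs):
        for j in range(0, img.shape[1], bs):
            blk = img[i:i + bs, j:j + bs].astype(float)
            lo, hi = blk.min(), blk.max()
            gain = 255.0 / (hi - lo) if hi > lo else 1.0
            params[(i // bs, j // bs)] = (gain, lo)
    return params

def difference_factor(img, a, b, bs=32):
    """Hypothetical block difference factor: gap in mean brightness and
    standard deviation (texture) between two adjacent blocks a and b."""
    def stats(idx):
        i, j = idx
        blk = img[i * bs:(i + 1) * bs, j * bs:(j + 1) * bs].astype(float)
        return blk.mean(), blk.std()
    (ma, sa), (mb, sb) = stats(a), stats(b)
    return abs(ma - mb) + abs(sa - sb)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
p = block_params(img)                      # one (gain, offset) per 32x32 block
d = difference_factor(img, (0, 0), (0, 1))
```

In the full method, a block whose difference factor signals an abrupt change at its boundary would be classified as a noise block and its parameters corrected with the histogram-based global adaptive parameters before per-pixel interpolation.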
Acta Photonica Sinica
  • Publication Date: Apr. 25, 2023
  • Vol. 52, Issue 4, 0428003 (2023)
Remote Sensing Image Fusion Based on Two-branch U-shaped Transformer
Wensheng FAN, Fan LIU, and Ming LI
Multi-spectral images are key references for earth observation. However, capturing rich spectral information comes at the cost of limited spatial resolution in multi-spectral imaging. To overcome the trade-off between spatial resolution and spectral resolution in remote sensing, panchromatic images with high spatial resolution but poor spectral information are adopted to complement multi-spectral imagery. As a result, the technique of fusing high-resolution panchromatic images and low-resolution multi-spectral images, namely pan-sharpening, has been developed and facilitates various remote sensing applications. Existing pan-sharpening methods can be roughly divided into four main categories, each with its own fusion strategy: component substitution, multi-resolution analysis, variational optimization, and deep learning. Recently, a number of deep-learning-based methods have been developed and achieve superior fusion quality. These methods are typically based on convolutional neural networks and sometimes incorporate the idea of generative adversarial networks. However, inadequate extraction of global contextual and multi-scale features often leads to a loss of spectral information and spatial details. To solve this problem, a two-branch U-shaped transformer is proposed in this paper. Firstly, the multi-spectral and panchromatic images to be fused are partitioned into non-overlapping patches of a fixed size, and each patch is embedded into a vector. The embedding vectors share the same feature dimension and carry the rich spectral and spatial information of the image patches. Subsequently, the embedding vectors of the multi-spectral and panchromatic images are fed into the two branches of the transformer encoder to extract hierarchical feature representations. The encoder consists of shifted-window transformer blocks and patch merging layers, so it can fully extract global and multi-scale features.
In the encoding process, hierarchical panchromatic feature representations are injected into the multi-spectral feature representations to obtain hierarchical fused feature representations. In addition, the high-level features are further fused through a transformer-based bottleneck. The transformer decoder progressively up-samples the high-level fused feature representation via patch expanding layers and suppresses redundant features via feature compression layers. In the decoding process, the hierarchical representations from the encoder are aggregated with the high-level fused feature representation via skip connections to avoid information loss. Finally, the decoder produces a high-resolution fused feature representation, and rearrangement and transposed convolution operations reconstruct the desired high-resolution multi-spectral image from the embedded patches. To validate the effectiveness of the proposed method, extensive experiments are conducted on three datasets acquired by the Gaofen-2, QuickBird, and WorldView-3 satellites. Since a ground-truth high-resolution multi-spectral image does not exist, the multi-spectral and panchromatic images are spatially degraded according to Wald's protocol, so that the original multi-spectral images can serve as reference images to supervise the training of the proposed network. The network is trained for 500 epochs with an AdamW optimizer, using the mean absolute error between the reference image and the fusion result as the loss function. To evaluate the fusion results, four full-reference indices are adopted for testing at reduced resolution, and one no-reference index, together with its spectral distortion and spatial distortion components, is used for testing at full resolution.
The feature dimension of the embedding vectors and the size of the partitioned patches are important hyper-parameters that affect both the performance and the computational complexity of the proposed method, so several model variants are built to observe their impact. The variant with an embedding dimension of 192 and a patch size of 4 achieves the best fusion results, but given its much higher computational cost the improvement is relatively limited; therefore, the embedding dimension is set to 128 and the patch size to 4 in this paper. Subsequently, the proposed method is compared with eight widely used fusion methods to verify its effectiveness. With the original multi-spectral image as the reference, the methods are first compared on images from the three satellites at reduced resolution. The visual results and the residual maps between the fusion results and the reference image show that the proposed method obtains the best visual quality and the smallest errors. In the quantitative evaluation, the proposed method also achieves the best values for all indices on all three test sets. Next, all methods are compared on the original images from the three satellites at full resolution. The visual result of the proposed method preserves both spectral information and spatial details better than those of the other methods. In the objective evaluation, the proposed method obtains the best results for all metrics on the Gaofen-2 data, and on the QuickBird and WorldView-3 data it outperforms the other methods on the spatial and overall indices.
In conclusion, the reduced-resolution and full-resolution experimental results on the three data sets demonstrate that the proposed method outperforms other methods in terms of both subjective visual effect and quantitative metrics.
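The patch-partitioning and embedding step described above can be sketched as follows (a NumPy illustration with a random, un-learned projection; the patch size of 4 and embedding dimension of 128 follow the hyper-parameters reported in the abstract, while the 64×64 four-band input is hypothetical):

```python
import numpy as np

def patch_embed(img, patch=4, dim=128, rng=None):
    """Partition an (H, W, C) image into non-overlapping patches and
    project each flattened patch to a dim-d embedding vector."""
    rng = rng or np.random.default_rng(0)
    h, w, c = img.shape
    assert h % patch == 0 and w % patch == 0
    patches = (img.reshape(h // patch, patch, w // patch, patch, c)
                  .transpose(0, 2, 1, 3, 4)          # group pixels per patch
                  .reshape(-1, patch * patch * c))   # (num_patches, p*p*C)
    proj = rng.standard_normal((patch * patch * c, dim)) / np.sqrt(dim)
    return patches @ proj                            # (num_patches, dim)

ms = np.random.default_rng(1).random((64, 64, 4))    # 4-band multi-spectral
tokens = patch_embed(ms)
print(tokens.shape)  # → (256, 128)
```

In the actual network the projection is a learned linear layer, and the resulting token sequences from the two branches feed the shifted-window transformer encoder.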
Acta Photonica Sinica
  • Publication Date: Apr. 25, 2023
  • Vol. 52, Issue 4, 0428002 (2023)
Image Quality Evaluation Method for Optical Remote Sensing Satellite Based on an Array of Point Sources
Yujun ZHENG, Weiwei XU, Xin LI, Xiaolong SI, Baoyun YANG, and Liming ZHANG
Ground pixel resolution and modulation transfer function are two important parameters for the image quality evaluation of high-spatial-resolution optical remote sensing satellites, and are of great significance in target recognition, image interpretation, and information extraction. We present an image quality evaluation method for remote sensors based on an array of point sources, which takes a lightweight, compact, automated reflective point-source array as the reference target; the two image quality parameters, ground pixel resolution and modulation transfer function, can be obtained at the same time. A two-dimensional Gaussian model is used to describe the point spread characteristic of the optical remote sensing satellite imaging system. The 5×5 pixel values of each reflective point source in the remote sensing image are substituted into the point spread function model, and the least squares method is used to fit a two-dimensional Gaussian surface and obtain the image point coordinates. According to the ground pixel resolution detection principle, combined with the measured ground positions of the point sources, the ground pixel resolution of the remote sensor is obtained. Based on the image point coordinates, all point source image data are positionally registered; the rearranged data are fitted again with a two-dimensional Gaussian surface to obtain an oversampled, sub-pixel-interpolated point spread function of the imaging system, from which the system modulation transfer function is derived. In the image quality evaluation test of the ZY-3 satellite based on reflective point sources, a 4×4 reflective point-source array with non-integer pixel intervals was deployed at the calibration test site along the flight direction and the linear array direction. The distance between adjacent point sources is 10.25 pixels, and the distance between point sources spaced two apart is 20.5 pixels.
The point spread function of the optical remote sensing imaging system can be sub-pixel interpolated to 0.25 pixels, which effectively overcomes the sampling effect of the imaging system and suppresses the influence of random noise. According to the collinearity test principle, the sum of the center-to-center distances of adjacent reflective point sources should equal the center-to-center distance of the point sources spaced two apart. The test results show that the collinearity error along both the detector linear array and the flight direction is less than 0.002 pixels, indicating that the image point extraction for the array point sources has high precision and accuracy. The standard deviations of the ground pixel resolution detection results in the linear array direction and the flight direction are 0.020 6 and 0.021 5, with relative deviations of 6.4‰ and 6.1‰, which shows that the point source image point extraction accuracy is high and that the linear array detector elements of the remote sensor have good rigidity and stability within a local area. Comparing the on-orbit modulation transfer function of the ZY-3 satellite detected by the array point source method and by the double-edge method, the differences between the two methods in the flight direction and the linear array direction are 0.000 2 and 0.012 6, respectively. Compared with periodic targets, array point sources are lightweight, compact, and automated. In the quantitative detection of the ground pixel resolution of optical remote sensing satellites, the array point source method is not affected by the subjective factors of image interpreters and can suppress the influence of atmospheric and random noise.
The array point source method is a two-dimensional modulation transfer function direct detection method according to the physical definition of modulation transfer function, which can intuitively describe the point spread characteristic of the photoelectric remote sensing imaging system. The array point sources can also be used as a reference target for the radiometric calibration of optical remote sensing satellites. The on-orbit radiometric calibration of remote sensors can be realized by setting up a multi-level point source array on the ground, combined with the measurement of atmospheric optical characteristic parameters. Therefore, the array point sources can comprehensively realize the image quality evaluation and radiometric calibration of optical remote sensing satellites.
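The sub-pixel image-point extraction described above can be illustrated with a small sketch (the log-linearized least-squares fit of a non-rotated 2-D Gaussian is a simplification of the paper's two-dimensional Gaussian surface fitting; the 5×5 patch and its synthetic centre are illustrative):

```python
import numpy as np

def gaussian_centroid(patch):
    """Sub-pixel centre of a point-source image: least-squares fit of a
    non-rotated 2-D Gaussian via log-linearization (ln I is quadratic in
    x and y, so the fit is an ordinary linear least-squares problem)."""
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    x, y, z = xs.ravel(), ys.ravel(), np.log(patch.ravel())
    A = np.column_stack([np.ones_like(x), x, y, x ** 2, y ** 2])
    c = np.linalg.lstsq(A, z, rcond=None)[0]
    return -c[1] / (2 * c[3]), -c[2] / (2 * c[4])    # (x0, y0)

# Synthetic noiseless 5x5 point-source patch centred at (2.3, 1.8)
ys, xs = np.mgrid[0:5, 0:5]
patch = np.exp(-((xs - 2.3) ** 2 + (ys - 1.8) ** 2) / (2 * 0.8 ** 2))
x0, y0 = gaussian_centroid(patch)
print(round(x0, 2), round(y0, 2))  # → 2.3 1.8
```

Registering many such sub-pixel centres from the non-integer-spaced array is what permits the oversampled point spread function and, in turn, the modulation transfer function.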
Acta Photonica Sinica
  • Publication Date: Apr. 25, 2023
  • Vol. 52, Issue 4, 0428001 (2023)
Design of Large Depth Field Photon Doppler Velocimeter and Application in Ultra-high Speed Interior Ballistic Research
Geyang HAO, Qing LUO, Yahan YANG, Zhaochao YAN, Guojun WU, and Jie HUANG
The Photon Doppler Velocimeter (PDV) is a non-contact velocity measurement instrument with high accuracy and high temporal resolution, which can capture the continuous interior ballistic velocity of ultra-high-speed launchers. Continuous velocity data are very important for ultra-high-speed experiments: they can be used to understand the performance of ultra-high-speed launchers and the associated physical processes, as well as to develop interior ballistics theory. Limited by the small size of the muzzle, the severe attenuation of laser energy, and the bandwidth of the detector, it is difficult for an ordinary PDV to obtain a continuous ultra-high-speed interior ballistic velocity. In this paper, we develop a large depth-of-field PDV with an effective working distance greater than 7 m, built on a fiber Mach-Zehnder interferometer. The emission aperture of the optical antenna is 25 mm, the beam waist lies at 3.3~3.4 m from the emission position, and the beam waist diameter is 1 245 mm. To verify the performance of the system, we first simulated high-speed motion with a motor-driven rotating turntable and tested the measurement error of the PDV system: in the velocity range of 1~40 m/s, the measurement uncertainty of the PDV is within 2.48%. We then carried out experiments on the ultra-high-speed ballistic range (FD-18A) of the China Aerodynamics Research and Development Center (CARDC) and repeatedly obtained the continuous interior ballistic velocity of ultra-high-speed two-stage light-gas guns. In the experiments, a reflector was placed directly behind the muzzle to redirect the laser signal, with the optical antenna on one side of the reflector. Finally, the PDV recorded the velocity of the launch model from rest to about 2 km/s and 7 km/s, with a maximum velocity of 6.89 km/s.
Comparison with numerical simulation shows that the measured velocity is lower than the simulated velocity in the 2 km/s tests, while it is higher than the simulated velocity in the 7 km/s test; the deviations are -20.11%, -23.7%, and +9.15%, respectively. Analysis of the velocity-acceleration data suggests that the difference in friction between simulation and experiment may be the main cause of the velocity discrepancy. The actual friction on the ultra-high-speed projectile in the ballistic range is greater than the theoretical friction assumed in the simulation, which may explain why the maximum speeds and accelerations fall below the theoretical results in the tests with an estimated launch velocity of 2 km/s. In the 7 km/s test, the projectile mass decreases rapidly due to severe friction, so the maximum velocity and acceleration in the second half of the motion gradually exceed the simulation results.
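For orientation, the basic PDV relation between beat frequency and velocity, v = λ·f_d/2, can be checked numerically (all parameters here, including the 1550 nm wavelength and the sample rate, are assumptions typical of fibre PDV systems, not values stated in the paper):

```python
import numpy as np

lam = 1550e-9            # laser wavelength (m), assumed
fs = 50e9                # digitizer sample rate (Hz), assumed
v_true = 2000.0          # target velocity (m/s), illustrative
f_d = 2 * v_true / lam   # Doppler beat frequency (~2.58 GHz)

# Simulated beat signal -> windowed FFT -> peak frequency -> velocity
t = np.arange(4096) / fs
sig = np.cos(2 * np.pi * f_d * t)
spec = np.abs(np.fft.rfft(sig * np.hanning(t.size)))
f_peak = np.fft.rfftfreq(t.size, 1 / fs)[spec.argmax()]
v_est = lam * f_peak / 2
print(round(v_est, 1))   # within one FFT bin of 2000 m/s
```

The multi-GHz beat frequency at km/s velocities is exactly why detector bandwidth limits an ordinary PDV, as noted above.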
Acta Photonica Sinica
  • Publication Date: Jun. 25, 2022
  • Vol. 51, Issue 6, 0628002 (2022)
Depth Map Reconstruction Based on a Computational Model of a Chaotic Laser Ranging System
Bowen JIANG, Tao YUE, and Xuemei HU
Radar is a sensor that uses electromagnetic waves for detection and ranging, and LiDAR (Light Detection and Ranging) has been widely applied in many fields such as robotics, ocean detection, atmospheric detection, and intelligent driving. Recently, LiDAR based on aperiodic random signals has attracted great attention. The chaotic signal is one such aperiodic random signal, and LiDAR systems that take a chaotic signal as the detection signal are called chaotic laser ranging systems. Considerable simulation and experimental results have shown that this kind of LiDAR system offers attractive qualities such as anti-jamming capability, high (mm-level) precision, and multi-target real-time ranging. Nevertheless, existing work has not yet proposed a simulation model for chaotic laser ranging systems based on the realistic physical process, nor has any work quantitatively analyzed the main degradation factors affecting the ranging accuracy of chaotic laser ranging systems and the quality of the reconstructed depth maps. To solve these problems, a computational model of chaotic laser ranging systems based on the physical process is proposed in this paper. The model comprehensively considers the factors that may degrade the chaotic signal and introduce ranging error in a realistic ranging process, including atmospheric attenuation, atmospheric turbulence, geometric attenuation, the surface information of the object and its Bidirectional Reflectance Distribution Function (BRDF), multipath noise, ambient noise, thermal noise, and the degradation model of the photodiodes. The computational model is implemented in MATLAB. Among the various degradation factors, three require special attention: the BRDF, ambient noise, and multipath noise.
To explore the influence of these three degradation factors on the accuracy of depth map reconstruction, this paper further uses a discrete chaotic sequence generated from a simulated Chua's chaotic circuit as the detection signal to scan synthesized depth images and reconstructs depth maps by cross-correlation. To comprehensively assess the quality of the depth maps reconstructed under different degradation factors and degradation levels, we not only calculate the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) between the reconstruction result and the ground truth but also visualize both the reconstructed and ground-truth depth images. The experimental results show that the quality of the depth maps reconstructed by the simulated chaotic laser ranging system is only slightly affected by the roughness coefficient of the BRDF model, but as the roughness coefficient increases, the influence of multipath noise becomes non-negligible. The system also shows satisfactory robustness against ambient noise as long as it is not extremely intense. However, the system is relatively sensitive to multipath noise: even when the multipath noise is not intense, the depth map reconstruction quality decreases rapidly. Therefore, when designing a realistic chaotic laser ranging system, the reflection and geometric properties of the object and the influence of multipath noise must be taken into careful consideration. In conclusion, the computational model proposed and analyzed in this paper can serve as an important reference for analyzing the degradation factors affecting ranging quality before designing and implementing a practical chaotic laser ranging system.
Moreover, with the help of this computational model, it is possible for researchers to quickly and efficiently generate synthetic chaotic laser ranging datasets similar to the data measured in a realistic environment.
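The cross-correlation ranging principle underlying such systems can be sketched as follows (white noise stands in for the chaotic waveform, and the sample rate, attenuation, and delay are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
c = 3e8                    # speed of light (m/s)
fs = 1e9                   # sample rate (Hz), assumed
ref = rng.standard_normal(8192)               # stand-in for chaotic waveform
delay = 137                                   # true round-trip delay (samples)
echo = np.concatenate([np.zeros(delay), ref])[:ref.size]
echo = 0.3 * echo + 0.05 * rng.standard_normal(ref.size)  # attenuation + noise

# The sharp autocorrelation of an aperiodic signal puts an unambiguous
# peak at the round-trip delay.
corr = np.correlate(echo, ref, mode="full")
lag = corr.argmax() - (ref.size - 1)
dist = c * lag / (2 * fs)                     # two-way propagation
print(lag, dist)  # → 137 20.55
```

The same peak-picking survives moderate additive noise, which is consistent with the robustness against ambient noise reported above, while overlapping multipath echoes would add competing correlation peaks.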
Acta Photonica Sinica
  • Publication Date: Jun. 25, 2022
  • Vol. 51, Issue 6, 0628001 (2022)
Design of Optical System of Infrared Star Simulator Uniform Radiation Under Specific Irradiance
Ru ZHENG, Changyu LI, Yue GAO, Guangxi LI, Bo LIU, and Kuo SUN
As the key equipment of a starlight navigation system, the starlight direction finder is mainly used to obtain the accurate angular positions of natural stars in space, which are used for high-precision attitude calculation, navigation, and positioning of missiles, ships, and space satellites. Its performance largely determines that of the starlight navigation system. In the visible band, the starlight direction finder is easily affected by background radiation produced by sunlight scattered in the atmosphere. According to Rayleigh scattering theory, the infrared band is scattered less strongly than the visible band, so its radiation is transmitted through the atmosphere with less loss. Practice shows that measuring stars in the daytime with the near-infrared band is much more efficient than with the visible band for navigation and positioning. Therefore, developing a high-precision infrared star simulator and realizing infrared star map simulation has become a new way to improve the efficiency of starlight navigation and the accuracy of star point detection. To realize ground calibration of the starlight direction finder and meet the three simulation requirements of radiation uniformity, specific irradiance, and spectrum, this paper proposes an optical system design for an infrared star simulator with uniform radiation under specific irradiance. According to the requirements of irradiance simulation and uniform irradiation, a radiant flux transfer model of the light source system is established, and a uniform-radiation light source system is designed to meet the radiation spectrum requirements. According to the docking requirements between the starlight direction finder and the infrared star simulator, a transmissive collimating optical system with high imaging quality is designed.
Based on the design results, the optical system is modeled in TracePro, and the irradiance and uniformity at the radiating surface of the light source system and at the exit of the optical system are simulated and analyzed. The irradiance and uniformity of the radiating surface of the light source are then measured with an irradiance meter to verify the theoretical analysis. The measured results show that the irradiance of the radiating surface meets the index requirements, and the irradiance non-uniformity is 2.75%, which satisfies the development requirements of the infrared star simulator.
Acta Photonica Sinica
  • Publication Date: Apr. 25, 2022
  • Vol. 51, Issue 4, 0428002 (2022)
Advances and Prospects of Laser Measurement Technology for Air Motion Parameters (Invited)
Yazhou YUE, Bin LI, and Hongjie LEI
The air motion parameters of an aircraft (air speed, angle of attack, angle of sideslip, etc.), which are important source parameters for flight control, navigation, and mission decision-making, are usually measured by a traditional airborne air data system and are used in flight stability control, accurate navigation, and precise weapon launch. However, the traditional air data system can no longer meet the performance requirements of modern military and civil aircraft in maneuverability, stealth, reliability, safety, comfort, and economy, owing to defects such as measurement failure at low air speed and during large maneuvers, significant aerodynamic delay, pitot tube icing, poor stealth performance, susceptibility to airframe-induced turbulence, and the need for complex compensation, all of which result from its mechanical, nonlinear, near-fuselage measurement characteristics. To overcome these defects, methods using laser technology to measure air motion parameters were first developed abroad; with their high accuracy, high linearity, measurement far away from the fuselage, and embedded installation, they can completely overcome the defects of the traditional airborne air data system. In this paper, the principle, characteristics, advantages, and disadvantages of the traditional and laser methods for measuring air motion parameters are compared and analyzed. The obvious advantages and advancement of laser measurement technology for air motion parameters have made it a research hotspot at home and abroad. Throughout its development, the technology can be categorized into two schemes, direct detection and coherent detection, and the principle and composition of both are presented.
Then the advantages and disadvantages of the two technical schemes are compared. The two schemes, each with its own advantages, are being developed in parallel and have been widely used. The application areas mainly include three aspects: accurate measurement of air motion parameters, flight calibration of conventional air data systems, and detection of wind shear and turbulence ahead of the aircraft. This article reviews the development and applications in these three aspects and summarizes the prototype achievements and flight test results. As the research reports show, the research institutes are mainly concentrated in Europe and America, including OADS, Ophir, Michigan Aerospace, Thales, ONERA, EADS, and DLR. These institutes have carried out many flight tests and accumulated a large amount of test data, and prototypes that can be equipped and applied have been successfully developed abroad. In contrast, domestic research lags behind; the main research institutions are AVIC CAIC and AVIC FACRI, and in recent years each has reported a principle prototype. The development directions of the two schemes are prospected respectively. The coherent detection scheme is likely to be the first equipped for airborne application owing to its low size, weight, and power (SWaP), although its detection capability at altitudes up to tens of kilometers needs continuous improvement. The direct detection scheme should further reduce its size, weight, power, and cost to meet airborne requirements. The application of quantum technology in laser measurement of air motion parameters is also prospected.
In the future, single-photon detection technology and quantum enhancement technology based on squeezed-state photons are expected to improve the detection sensitivity of such systems and achieve ultra-sensitive detection. In view of the large development gap between home and abroad, some suggestions are provided for domestic research on laser measurement technology for air motion parameters, such as building a robust related industrial chain, focusing on low-SWaP design, strengthening cooperation, and increasing capital investment. The purpose of reviewing and clarifying the principle, applications, and development trends of laser measurement technology for air motion parameters is to provide a useful reference and new ideas for researchers engaged in prototype and application research, and to promote its in-depth application in the aviation field.
Acta Photonica Sinica
  • Publication Date: Apr. 25, 2022
  • Vol. 51, Issue 4, 0428001 (2022)
Hydrogen Sensor Based on Isopropanol Filling and Vernier Effect Sensitization
Xun MAO, You WANG, and Yutang DAI
The sensitivity of a Pt-WO3-based Fabry-Perot hydrogen interferometer is greatly improved by using isopropanol, which has a high thermo-optic coefficient, in a parallel-connection structure. The isopropanol cavity of the sensor is composed of a hollow fiber with an inner diameter of 126 μm and a single-mode fiber whose end face is coated with a silver film. Hydrogen sensitivity tests show that the interferometer has a sensitivity of 1.746 4 nm/% over the hydrogen concentration range of 0~2% (vol%), with fast response and good reusability. Two interferometers with a small cavity-length difference are connected in parallel through a 2×2 coupler, and the sensitivity is amplified by the optical Vernier effect. The combined sensor achieves a high hydrogen sensitivity of 15.729 3 nm/%, and the two interferometers realize temperature self-compensation, which greatly reduces the temperature cross-sensitivity. This research provides a useful exploration for the preparation of hydrogen sensors with high sensitivity, low cost, and a wide application range.
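The Vernier-effect sensitization mentioned above arises when two cavities have slightly different free spectral ranges; a minimal numeric sketch (the cavity lengths and refractive index are hypothetical, chosen so the ~10× magnification is of the same order as the paper's ~9× sensitivity gain from 1.746 4 to 15.729 3 nm/%):

```python
# Vernier magnification M = FSR_ref / (FSR_ref - FSR_sens) for two
# Fabry-Perot cavities read out in parallel. All numbers are hypothetical.
n = 1.3772                      # refractive index of isopropanol (approx.)
lam = 1550e-9                   # working wavelength (m), assumed
L_sens, L_ref = 100e-6, 90e-6   # cavity lengths (m), hypothetical

fsr = lambda L: lam ** 2 / (2 * n * L)   # free spectral range of an FP cavity
M = fsr(L_ref) / (fsr(L_ref) - fsr(L_sens))
print(round(M, 1))  # → 10.0
```

The closer the two free spectral ranges, the larger the envelope magnification, which is why a small cavity-length difference is the key design choice.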
Acta Photonica Sinica
  • Publication Date: May. 25, 2021
  • Vol. 50, Issue 5, 194 (2021)
Polarized Light/Binocular Vision Bionic Integrated Navigation Method
Jinkui CHU, Jianhua CHEN, Jinshan LI, Kun TONG, Jin LI, and Hanpei HU
A bionic integrated navigation method combining polarized light and binocular vision is proposed to realize low-cost, high-precision, robust, and fully autonomous navigation for intelligent mobile robots in complicated disturbing environments. Firstly, a tightly-coupled navigation algorithm based on graph optimization is designed: by constructing an optimization function, the data of the polarization sensor and the binocular vision sensor are fused. Then, an experimental platform for the bionic integrated navigation method is built. Finally, its performance is tested in an outdoor vehicle-mounted experiment and compared with a traditional vision algorithm. The results show that the heading angle accuracy is improved by 38.9% and the position accuracy by 8.9% compared with the traditional vision algorithm. The proposed method reduces the heading angle error of the vision algorithm and improves robustness. Moreover, the bionic polarization sensor has good real-time performance and strong anti-interference ability, and can meet the accuracy and reliability requirements of outdoor ground carrier navigation. The proposed method combines two kinds of bionic navigation, making comprehensive use of the advantages of biological navigation.
Acta Photonica Sinica
  • Publication Date: May. 25, 2021
  • Vol. 50, Issue 5, 184 (2021)
Optimization Algorithm for Polarization Remote Sensing Cloud Detection Based on Machine Learning
Jiejun WANG, Shaohui LIU, Shu LI, Song YE, Xinqiang WANG, and Fangyuan WANG
The empirical-threshold cloud detection algorithm for polarization remote sensing is strongly affected by subjective factors and is prone to inaccurate cloud detection over bright surfaces. To address this problem, this paper proposes a machine learning cloud detection algorithm that combines active and passive remote sensing satellites. The algorithm is based on the multi-channel, multi-angle polarization characteristics of the POLDER3 payload and the high-precision vertical cloud characteristics of the CALIOP payload. Using data from the regions where the POLDER3 and CALIOP observations overlap, a BP neural network optimized by the Particle Swarm Optimization algorithm is built to train the cloud detection model. Based on the trained model, a cloud detection experiment was carried out on POLDER3 level-1 data. The experiment shows that the cloud detection result of this algorithm is 92.46% consistent with the MODIS cloud detection product, higher than the 83.13% consistency between the official POLDER3 cloud detection product and the MODIS product. Comparing the experimental results of this algorithm with the optical characteristics of different pixels in the official POLDER3 cloud detection product shows that, compared with the official POLDER3 algorithm, this algorithm is more sensitive to thin clouds over bright surfaces and can perform cloud detection more effectively.
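The PSO-trained network idea can be sketched on toy data (all network sizes, PSO constants, and the synthetic "cloud/clear" labels below are illustrative, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 2))                     # two toy input features
y = (X.sum(axis=1) > 1).astype(float)        # synthetic "cloud/clear" labels

def unpack(w):                               # 2x4 + 4 + 4x1 + 1 = 17 weights
    return w[:8].reshape(2, 4), w[8:12], w[12:16].reshape(4, 1), w[16]

def loss(w):                                 # MSE of a tiny 2-4-1 BP network
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2).ravel()))
    return np.mean((p - y) ** 2)

# Standard global-best PSO over the 17-dimensional weight vector
swarm = rng.standard_normal((30, 17))
vel = np.zeros_like(swarm)
pbest = swarm.copy()
pbest_f = np.array([loss(w) for w in swarm])
for _ in range(200):
    g = pbest[pbest_f.argmin()]
    r1, r2 = rng.random((2, 30, 17))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - swarm) + 1.5 * r2 * (g - swarm)
    swarm = swarm + vel
    f = np.array([loss(w) for w in swarm])
    better = f < pbest_f
    pbest[better], pbest_f[better] = swarm[better], f[better]
print(round(pbest_f.min(), 3))  # final training MSE, well below the 0.25
                                # of a constant 0.5 predictor
```

In the paper this idea is scaled up: the particles encode BP network weights, and the fitness is computed on POLDER3/CALIOP co-located samples rather than synthetic labels.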
Acta Photonica Sinica
  • Publication Date: Feb. 25, 2021
  • Vol. 50, Issue 2, 166 (2021)